Design - Plan

The stakeholders at L. B. Primary School wanted to determine whether students in the Guided Reading (GR) program were demonstrating improvements in their reading skills over the course of the school year. In addition, the program had been in place for ten years, and stakeholders were concerned that program fidelity had not been maintained. The purpose of this evaluation is to determine whether the Guided Reading program at L. B. Primary School is effective.
Specifically, the following evaluation question will be answered: Is Guided Reading effective? The following subquestions will guide the inquiry:
- Is the GR program at L. B. Primary School being delivered as intended and outlined by Fountas & Pinnell (1996)? This question addresses program fidelity.
- Are students who participate in the program demonstrating improvements in reading skills over the course of the school year? This question addresses improvement of student reading skills; that is, it describes improvements, or the lack thereof, rather than implying causality.
To gather evaluation data, interviews and focus groups will be conducted, and specific literacy assessments (DRA and DIBELS) will be administered. Teachers, administrators, and researchers will work collaboratively to determine interview and focus group questions. Teachers and administrators will be interviewed, and focus groups will be conducted with groups of five children representing the three classrooms in the evaluation. These children will be members of their classroom guided reading groups. Teachers will administer the DRA and DIBELS, and researchers will conduct the interviews and focus groups. Because the teachers are hostile to the process, they will be compensated for their time. The school administration is very supportive of this evaluation and has agreed to provide release time for the teachers to participate in the interviews and administer the literacy assessment measures (DRA and DIBELS), as required.
The following notation reflects our evaluation design:
Design Notation:

DRA:    O1  X  O2  O3  O4
        (pre)   GR   (post)
where O = DRA assessment and X = participation in GR

DIBELS: O1  X  O2  O3  O4  O5  O6  O7  O8
        (pre)   GR   (post)
where O = DIBELS assessment and X = participation in GR
As reflected in the design notation, data collection will occur throughout the school year, with initial literacy assessment data collected in September, at the beginning of the school year, before the commencement of Guided Reading instruction. Subsequent DRA assessments will coincide with the school's reporting periods, in November, March, and June. DIBELS assessments will be more frequent, occurring every month (with variation amongst the grades), with the exception of December and April, because the school's modified calendar leaves a limited number of instructional days in these months. Interviews with stakeholders will occur in February, and focus groups with students will be conducted in March. Please see the Guided Reading Program Evaluation: Data Collection Chart (Table 1) for a consolidated representation of the timeline.
Table 1: Guided Reading Program Evaluation: Data Collection Chart
The results of the literacy assessments, focus group data, and interview data will be collaboratively analyzed and interpreted by teachers (primary stakeholders) and researchers.
Triangulation

Data will be triangulated through the use of two different literacy assessments (DRA and DIBELS) measuring similar constructs. As well, teacher, administrator, and student perceptions of program effectiveness and student progress will be added to the literacy assessment data to provide a thick, rich description of the situation and to answer our overarching question: Is the Guided Reading program at L. B. Primary School effective?
Methodology

As outlined in the design plan and evaluation contract, we plan to engage in a formative participatory evaluation for the purpose of gathering information to improve the GR program at L. B. Primary School. Specifically, we will investigate whether program fidelity has been maintained and whether students participating in the program are demonstrating improvements in their reading skills over the course of the school year.
Data Analysis and Valuing
School administration has indicated that the fidelity of the program is of primary concern. The primary stakeholders, together with the evaluators, have determined that they will limit the analysis to a reduced set of the outcomes listed in the logic model. Numerical data from the quantitative assessments (DRA and DIBELS) will be collated and organized in spreadsheets to promote ease of management, then converted to box-and-whisker plots to aid analysis and interpretation. The evaluation team will examine the plots to determine growth over time and to identify trends or patterns within grades or across assessments, as well as overall trends that span assessments and grades.
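To illustrate this step, the sketch below shows how collated DRA results for one class might be converted to a box-and-whisker plot. The scores are hypothetical placeholders, and the use of Python with matplotlib is our assumption about tooling rather than a requirement of the plan.

```python
# Minimal sketch: plot collated DRA levels as box-and-whisker plots, one box
# per assessment period, so the team can inspect growth over the year.
# All data below are hypothetical.
import matplotlib.pyplot as plt

# Hypothetical DRA levels for one class at the four assessment points.
dra_levels = {
    "Sept": [2, 3, 4, 4, 6, 6, 8, 10],
    "Nov":  [4, 4, 6, 6, 8, 8, 10, 12],
    "Mar":  [6, 8, 10, 10, 12, 12, 14, 16],
    "June": [10, 12, 14, 14, 16, 16, 18, 18],
}

fig, ax = plt.subplots()
ax.boxplot(list(dra_levels.values()))   # boxes are placed at x = 1..4
ax.set_xticklabels(dra_levels.keys())   # label each box with its period
ax.set_xlabel("Assessment period")
ax.set_ylabel("DRA level")
ax.set_title("DRA levels across the school year (hypothetical data)")
plt.show()
```

A rising median from box to box would suggest growth over time, while widening whiskers would flag growing spread within the group, which the team could then examine by grade.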
Answers to each of the interview questions will be listed together and carefully reviewed to determine whether any responses are unrelated to the topic of the question (Killion, 2008). Focus group and interview data will be sorted and coded, as patterns, dominant themes, categories, or commonalities become apparent. Dominant themes will be presented in a narrative accompanied by tables. The evaluation team will then analyze data from the interviews and focus groups to search for patterns or themes that span the data sets (Killion, 2008).
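As a concrete illustration of the tallying involved, the sketch below counts hand-assigned codes across interview and focus group transcripts. The codes, sources, and choice of Python are illustrative assumptions; the plan itself does not prescribe tooling.

```python
# Minimal sketch: tally codes assigned during manual review of interview and
# focus group transcripts, then rank them to surface dominant themes.
# All codes and sources below are hypothetical.
from collections import Counter

# (source, code) pairs produced during manual coding of responses.
coded_responses = [
    ("teacher_interview_1", "group sizes too large"),
    ("teacher_interview_2", "insufficient levelled books"),
    ("admin_interview",     "insufficient levelled books"),
    ("focus_group_1",       "enjoys small-group reading"),
    ("focus_group_2",       "enjoys small-group reading"),
    ("focus_group_3",       "enjoys small-group reading"),
]

theme_counts = Counter(code for _, code in coded_responses)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```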
In addition, the following data analysis techniques will be employed:
- Describing – a narrative description of the program will be provided, including a description of the changes that occur from pre- to post-test in students' literacy assessment scores.
- Counting – the number of students who demonstrated progress in their literacy scores will be presented through a numerical description.
- Clustering – students will be grouped according to whether they regressed in their literacy level, maintained their level, or improved their level (see the sketch following this list).
- Seeking trends/patterns – commonalities, recurring patterns, or trends will be identified and reported.
- Examination of outliers – the extreme ends of the data set will be examined.
- Eliminating rival explanations – for example, a drop in a student's literacy level may be better explained by low attendance than by participation in a GR group.
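The sketch below illustrates the counting and clustering techniques on pre/post DRA records. The student IDs and levels are invented for illustration, and Python is again an assumed tool rather than part of the evaluation plan.

```python
# Minimal sketch: cluster students by whether they regressed, maintained, or
# improved their DRA level between the September and June assessments, then
# count each cluster. All records below are hypothetical.
from collections import defaultdict

# Hypothetical (student ID, September DRA level, June DRA level) records.
records = [("S01", 4, 14), ("S02", 6, 16), ("S03", 8, 8), ("S04", 10, 8)]

clusters = defaultdict(list)
for student, pre, post in records:
    if post > pre:
        clusters["improved"].append(student)
    elif post == pre:
        clusters["maintained"].append(student)
    else:
        clusters["regressed"].append(student)

for label in ("improved", "maintained", "regressed"):
    print(f"{label}: {len(clusters[label])} student(s) {clusters[label]}")
```

Students in the "regressed" cluster would then be candidates for the outlier examination and rival-explanation checks above, for example by cross-referencing attendance records.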
Researchers, program practitioners, and the school administrator will work collaboratively to analyze and interpret all data to enhance understanding and increase the value placed on the analysis process (Killion, 2008).
To guide the data analysis process, the evaluation team will use a series of questions:
- What patterns or trends exist in the data?
- What are the overall results?
- How consistent are the patterns across groups?
- What are the outliers and how do they differ?
- What strengths and weaknesses are evident?
- What changes occurred in students' literacy knowledge, attitudes, skills, or behaviours?
- What unexpected events or outcomes occurred?
(Adapted from Killion, 2008, pp. 107-108)
Evaluators will guide stakeholders as they use the results from this evaluation to make positive changes in the program.
Clarify Goals and Objectives
When establishing our evaluation goals/objectives, we ensured that they would be defined in measurable terms. Program fidelity can be measured by examining a number of questions:
- What kind of training have teachers received in delivering a GR program?
- What kinds of reading materials are in place, and how are they being utilized?
- What time and space have been set aside for GR?
- How are student assessment data used to determine reading groups?
- How are GR lessons structured (i.e., group size, frequency, duration, lesson components)?
- What kinds of tasks are other students engaged in while GR groups are running?
Improvement of skills is measurable through the use of the DRA and DIBELS, which are to be administered four and up to eight times, respectively, over the course of the school year. Teacher, administrator, and student perceptions of student progress will help develop a rich, thick understanding of the situation, and all data will converge to illuminate the GR program at L. B. Primary School as we seek to answer our research question: Is Guided Reading effective?
Evaluation Contract and Timeline
Evaluation Contract
This evaluation will take place over the course of one school year at L. B. Primary School. Researchers will meet with school staff during an August organizational day to begin planning for the evaluation, including the establishment of an evaluation team, logic model development workshops to support the collaborative creation and examination of a Guided Reading Logic Model, and definition of the specific evaluation focus and guiding question(s).

The evaluation team will then determine the sources of evaluation data and when they will be gathered. The team decided that interviews with the school administrator and program teachers; three focus groups of five children, one from each of the participating classes; and literacy assessments (DRA and DIBELS) of all students in grades one to three would be used to help answer the research question. The evaluation team will collaboratively determine the interview and focus group questions and the scheduling. Procedures for the administration of the literacy assessments (DRA and DIBELS) will be reviewed, and school administration will provide training to those who are unfamiliar with the instruments. It was determined that DRA data would be gathered four times during the year: September, November, March, and June, coinciding with the school's existing assessment schedule. DIBELS assessments will be administered more regularly, up to eight times over the course of the school year (dependent on student grade and the assessments recommended by DIBELS). Table 2 below presents a consolidated representation of the evaluation timeline.

As the evaluation is to take place in the school setting, focus groups and interviews will occur in the school conference room, and individual assessments will take place in the classrooms. Teachers will be provided one day of release time for each of the data gathering periods, and a portion of one day to engage in a one-on-one interview with one of the researchers. Interviews and focus groups will be recorded so that the researchers are able to review and readily access information. A video camera will be used to record the interview and focus group sessions, and all appropriate releases and permissions will be signed in advance.
Consistent with participatory evaluation design, decisions at all stages of the evaluation will be collaborations between stakeholders and researchers (Cousins & Earl, 1995b); data gathering, analysis, and interpretation will be shared endeavours. The evaluators will compile results and interpretations into a final report that stakeholders will then review to assess and ensure the accuracy of perceptions that are presented. The final report will be shared with stakeholders at an end-of-year planning meeting. Decisions about how to use the evaluation results will be discussed and researchers will guide stakeholders to use the findings in decision-making for the purpose of program improvement. Researchers will also guide stakeholders as they determine the next steps.